do not try another peer on a local error #440
base: main
Conversation
🤖 Claude Code Review — Status: Complete. Current Review: Found a format string bug in 4 error messages (lines 572, 578, 609, 620), each with one %s placeholder but two arguments.
```go
// Local errors include context cancellation, semaphore exhaustion, and storage errors
// that are caused by local resource constraints rather than peer failures.
// These errors should not trigger peer failover since trying another peer won't help.
func isLocalError(err error) bool {
```
Move to Error package?
Done
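For context, a minimal sketch of what such a helper might look like once moved into the errors package. The context sentinels are from the standard library; `ErrSemaphoreExhausted` and `ErrStorage` are hypothetical names standing in for Teranode's own error kinds, and the real implementation may differ:

```go
package errors

import (
	"context"
	stderrors "errors"
)

// Hypothetical sentinel errors standing in for Teranode's own error kinds.
var (
	ErrSemaphoreExhausted = stderrors.New("semaphore exhausted")
	ErrStorage            = stderrors.New("storage error")
)

// IsLocalError reports whether err was caused by local resource constraints
// (context cancellation, semaphore exhaustion, storage failures) rather than
// by the remote peer, so the caller should not fail over to another peer.
func IsLocalError(err error) bool {
	if err == nil {
		return false
	}
	return stderrors.Is(err, context.Canceled) ||
		stderrors.Is(err, context.DeadlineExceeded) ||
		stderrors.Is(err, ErrSemaphoreExhausted) ||
		stderrors.Is(err, ErrStorage)
}
```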
```go
}
// Check if error is local (not peer-related) - don't retry with other peers
if errors.IsLocalError(err) {
	return errors.NewServiceError("[catchup:fetchAndStoreSubtreeAndSubtreeData] Local error fetching subtreeData for %s (not retrying with other peers)", subtreeHash.String(), err)
```
Format string bug: has one %s placeholder but passes two arguments (hash and err). The error will be ignored. Need to add %v placeholder for the error.
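A corrected call would give the error its own verb, roughly as sketched below. This assumes `NewServiceError` follows fmt-style formatting, as the reviewer's suggestion implies:

```go
// Add a %v verb so the wrapped error is rendered instead of being dropped.
return errors.NewServiceError(
	"[catchup:fetchAndStoreSubtreeAndSubtreeData] Local error fetching subtreeData for %s (not retrying with other peers): %v",
	subtreeHash.String(), err,
)
```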


While running teratestnet and syncing, I could see semaphore errors. blockvalidation was deciding the problem was with the peer and started over with a new peer. The problem in this instance was in my own teranode, so it shouldn't pick another peer and possibly downgrade the 'bad' peer.